About the IODS project

This course is just what I need now.

We are writing a couple of articles in psychoacoustics and need to use GAM and PCA analyses. This course helps me go deeper into R programming and also gives me a better understanding of what my colleague is doing with GAMs (Generalized Additive Models). Earlier MOOC courses in statistics have been interesting and useful, and this is an excellent continuation.

I have a new GitHub repository dedicated to this project:

https://github.com/jussijaatinen/IODS-project


Analysis of the Learning2014 dataset


The data were collected with a survey about teaching and learning after the course “Introduction to Statistics for Social Science” in fall 2014. The most essential parts of the survey (deep, surface and strategic learning) were named and combined so that their connection to the intended learning dimensions can be seen.


Summary of the dataset

There were twice as many females as males among the respondents. The dataset has 166 observations and 7 variables.

 gender       Age           Attitude          deep            stra      
 F:110   Min.   :17.00   Min.   :1.400   Min.   :1.583   Min.   :1.250  
 M: 56   1st Qu.:21.00   1st Qu.:2.600   1st Qu.:3.333   1st Qu.:2.625  
         Median :22.00   Median :3.200   Median :3.667   Median :3.188  
         Mean   :25.51   Mean   :3.143   Mean   :3.680   Mean   :3.121  
         3rd Qu.:27.00   3rd Qu.:3.700   3rd Qu.:4.083   3rd Qu.:3.625  
         Max.   :55.00   Max.   :5.000   Max.   :4.917   Max.   :5.000  
      surf           Points     
 Min.   :1.583   Min.   : 7.00  
 1st Qu.:2.417   1st Qu.:19.00  
 Median :2.833   Median :23.00  
 Mean   :2.787   Mean   :22.72  
 3rd Qu.:3.167   3rd Qu.:27.75  
 Max.   :4.333   Max.   :33.00  
'data.frame':   166 obs. of  7 variables:
 $ gender  : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
 $ Age     : int  53 55 49 53 49 38 50 37 37 42 ...
 $ Attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
 $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
 $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
 $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
 $ Points  : int  25 12 24 10 22 21 21 31 24 26 ...

Correlations between variables

The three variables with the strongest correlation with the dependent variable Points were chosen for the model. The independent variables were attitude (r = 0.44), strategic learning (r = 0.15) and surface learning (r = −0.14). Genders were combined in the correlation test.
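The correlation check can be reproduced roughly as follows (a sketch; the column names follow the summary above, and the hypothetical num_vars selection simply drops the gender factor):

```r
# Correlations of the numeric variables with exam points
num_vars <- learning2014[, c("Attitude", "deep", "stra", "surf", "Points")]
round(cor(num_vars)["Points", ], 2)
```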


Regression models

To test which of the chosen independent variables contributed significantly, several alternative regression models were fitted and compared.
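The candidate models were fitted with lm(); a sketch of two of the calls (the formula interface with the data argument is equivalent to the learning2014$ prefixes visible in the output):

```r
# Full three-predictor model and the attitude-only alternative
model_full <- lm(Points ~ Attitude + stra + surf, data = learning2014)
model_attitude <- lm(Points ~ Attitude, data = learning2014)
summary(model_full)
summary(model_attitude)
```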


Call:
lm(formula = learning2014$Points ~ learning2014$Attitude + learning2014$stra + 
    learning2014$surf, data = learning2014)

Residuals:
     Min       1Q   Median       3Q      Max 
-17.1550  -3.4346   0.5156   3.6401  10.8952 

Coefficients:
                      Estimate Std. Error t value Pr(>|t|)    
(Intercept)            11.0171     3.6837   2.991  0.00322 ** 
learning2014$Attitude   3.3952     0.5741   5.913 1.93e-08 ***
learning2014$stra       0.8531     0.5416   1.575  0.11716    
learning2014$surf      -0.5861     0.8014  -0.731  0.46563    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 5.296 on 162 degrees of freedom
Multiple R-squared:  0.2074,    Adjusted R-squared:  0.1927 
F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08

Call:
lm(formula = learning2014$Points ~ learning2014$Attitude + learning2014$stra, 
    data = learning2014)

Residuals:
     Min       1Q   Median       3Q      Max 
-17.6436  -3.3113   0.5575   3.7928  10.9295 

Coefficients:
                      Estimate Std. Error t value Pr(>|t|)    
(Intercept)             8.9729     2.3959   3.745  0.00025 ***
learning2014$Attitude   3.4658     0.5652   6.132 6.31e-09 ***
learning2014$stra       0.9137     0.5345   1.709  0.08927 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 5.289 on 163 degrees of freedom
Multiple R-squared:  0.2048,    Adjusted R-squared:  0.1951 
F-statistic: 20.99 on 2 and 163 DF,  p-value: 7.734e-09

Call:
lm(formula = learning2014$Points ~ learning2014$Attitude + learning2014$surf, 
    data = learning2014)

Residuals:
    Min      1Q  Median      3Q     Max 
-17.277  -3.236   0.386   3.977  10.642 

Coefficients:
                      Estimate Std. Error t value Pr(>|t|)    
(Intercept)            14.1196     3.1271   4.515 1.21e-05 ***
learning2014$Attitude   3.4264     0.5764   5.944 1.63e-08 ***
learning2014$surf      -0.7790     0.7956  -0.979    0.329    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 5.32 on 163 degrees of freedom
Multiple R-squared:  0.1953,    Adjusted R-squared:  0.1854 
F-statistic: 19.78 on 2 and 163 DF,  p-value: 2.041e-08

Call:
lm(formula = learning2014$Points ~ learning2014$Attitude + learning2014$stra + 
    learning2014$gender, data = learning2014)

Residuals:
     Min       1Q   Median       3Q      Max 
-17.7179  -3.3285   0.5343   3.7412  10.9007 

Coefficients:
                      Estimate Std. Error t value Pr(>|t|)    
(Intercept)             8.9798     2.4030   3.737 0.000258 ***
learning2014$Attitude   3.5100     0.5956   5.893 2.13e-08 ***
learning2014$stra       0.8911     0.5441   1.638 0.103419    
learning2014$genderM   -0.2236     0.9248  -0.242 0.809231    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 5.304 on 162 degrees of freedom
Multiple R-squared:  0.2051,    Adjusted R-squared:  0.1904 
F-statistic: 13.93 on 3 and 162 DF,  p-value: 3.982e-08

Call:
lm(formula = learning2014$Points ~ learning2014$Attitude, data = learning2014)

Residuals:
     Min       1Q   Median       3Q      Max 
-16.9763  -3.2119   0.4339   4.1534  10.6645 

Coefficients:
                      Estimate Std. Error t value Pr(>|t|)    
(Intercept)            11.6372     1.8303   6.358 1.95e-09 ***
learning2014$Attitude   3.5255     0.5674   6.214 4.12e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 5.32 on 164 degrees of freedom
Multiple R-squared:  0.1906,    Adjusted R-squared:  0.1856 
F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09

As seen above, of the chosen variables only attitude was statistically significant.

Comparing the R-squared (R²) values between the models shows that the attitude variable alone explains almost as much of the variance as when it is combined with the other two tested variables. Still, it explains only a fraction (< 20%) of the total variance in this data.


Validity of the model

To test the validity of the model and the assumption of normally distributed errors, the following diagnostic plots were produced: Residuals vs Fitted, Normal Q-Q and Residuals vs Leverage. The Residuals vs Fitted plot shows the residuals scattered reasonably randomly around zero. In the Normal Q-Q plot the residuals follow the line well enough (except at the far ends), supporting the normality assumption. In the Residuals vs Leverage plot there are no outliers in isolated positions far from the rest of the data. Therefore the assumptions of linearity and normally distributed errors can be accepted.
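These diagnostic plots come straight from the fitted model object; a sketch, assuming the final attitude-only model:

```r
# which = 1: Residuals vs Fitted, 2: Normal Q-Q, 5: Residuals vs Leverage
final_model <- lm(Points ~ Attitude, data = learning2014)
plot(final_model, which = c(1, 2, 5))
```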


Analysis of alcohol consumption in two Portuguese secondary schools


Data, hypotheses and selected explanatory variables

The data for this study were collected from two Portuguese secondary schools using school reports and questionnaires. The aim of this analysis is to find relationships between high alcohol consumption and the hypothesized explanatory variables.

Hypotheses:

  • Males consume more alcohol
  • It is more common to use alcohol with friends outside home
  • Good family relations decrease drinking
  • Drinking may cause absences

Based on these hypotheses, the selected explanatory variables are sex, goout, famrel and absences. Following the first hypothesis, all analyses have been divided into two groups, females and males.

The dataset contains many background variables such as demographics, social relationships, parents’ education and employment, and living area, as well as grades, absences and health.

A full description of the study is available here: https://archive.ics.uci.edu/ml/datasets/Student+Performance.


All variables and distributions

Observations about selected explanatory variables:

  • Sex: Student’s sex, 198 females, 184 males (‘F’ - female or ‘M’ - male)
  • Goout: Going out with friends, quite normally distributed, median 3 (1 - very low, 5 - very high)
  • Famrel: Quality of family relationships, negatively skewed, median 4 (1 - very bad, 5 - excellent)
  • Absences: Number of school absences, multimodal, not normally distributed, median 3 (min. 0 - max. 45)

For this alcohol consumption study, the original data have been modified by averaging the weekend and weekday alcohol consumption variables. As a result, two new variables have been created: alc_use (Likert scale 1-5, where 1 = very low, 5 = very high) and high_use (logical, TRUE if alc_use > 2).
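The two derived variables can be created roughly like this (a sketch using dplyr; it assumes the joined data frame alc with the weekday and weekend consumption columns Dalc and Walc from the original data):

```r
library(dplyr)

# Average weekday (Dalc) and weekend (Walc) consumption,
# then flag high use as an average above 2
alc <- alc %>%
  mutate(alc_use = (Dalc + Walc) / 2,
         high_use = alc_use > 2)
```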

After these additions, the total number of variables is 35 and the number of observations is 382. All variables and their distributions can be seen below.


Distributions and observations of chosen variables

Relationship between alcohol consumption and sex

As seen in the table below, males generally consume more alcohol. In the low-consumption group (very low or low = FALSE), 58% were females and 42% males, whereas in the high-consumption group only 37% were females and 63% males.

     
      FALSE TRUE Sum
  F     156   42 198
  M     112   72 184
  Sum   268  114 382
Relationship between alcohol consumption and going out with friends

Spending more free time outside the home with friends significantly increases alcohol consumption in both sexes.

Relationship between alcohol consumption and absences

Although absences increase with higher alcohol consumption, the difference is quite moderate, though larger in males.

Relationship between alcohol consumption and family relations

Although the same tendency occurs in this case as well, the relationship between alcohol consumption and family relations may run in both directions; which is cause and which is consequence is unclear. Bad family relationships may lead to higher alcohol consumption, but drinking may also provoke conflicts between family members.


Logistic regression

In binomial logistic regression, the selected explanatory variables (discrete or continuous) explain a binary dependent variable. In this case the dependent variable is high alcohol use (high_use). It has been constructed by classifying the aforementioned alc_use variable: values up to 2 have been classified as low (FALSE) and values above 2 as high (TRUE).

All chosen variables (sex, goout, famrel and absences) are statistically significant. Sex is the most significant of all, and its odds ratio is the highest (2.75). This means that the odds of high alcohol consumption for males are almost three times those for females. Another strongly influential variable is going out with friends (goout). None of the variables’ confidence intervals include 1, which means they all have a meaningful influence in the model.
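The model and the odds ratios were computed roughly as follows (a sketch; the model object name m is illustrative):

```r
# Binomial logistic regression for high alcohol use
m <- glm(high_use ~ famrel + absences + sex + goout,
         data = alc, family = "binomial")
summary(m)

# Odds ratios with 95% confidence intervals
cbind(OR = exp(coef(m)), exp(confint(m)))
```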


Call:
glm(formula = high_use ~ famrel + absences + sex + goout, family = "binomial", 
    data = alc)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.7151  -0.7820  -0.5137   0.7537   2.5463  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -2.76826    0.66170  -4.184 2.87e-05 ***
famrel      -0.39378    0.14035  -2.806 0.005020 ** 
absences     0.08168    0.02200   3.713 0.000205 ***
sexM         1.01234    0.25895   3.909 9.25e-05 ***
goout        0.76761    0.12316   6.232 4.59e-10 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 465.68  on 381  degrees of freedom
Residual deviance: 379.81  on 377  degrees of freedom
AIC: 389.81

Number of Fisher Scoring iterations: 4
                    OR      2.5 %    97.5 %
(Intercept) 0.06277092 0.01648406 0.2225470
famrel      0.67449969 0.51060362 0.8869199
absences    1.08510512 1.04012840 1.1353940
sexM        2.75203774 1.66768032 4.6128191
goout       2.15461275 1.70345866 2.7639914

Predictive power of the model

With the chosen variables, the total proportion of misclassified predictions (the penalty) is about 0.20 (the TRUE-FALSE and FALSE-TRUE cells summed). This means that about 4/5 of the predictions are correct (the TRUE-TRUE and FALSE-FALSE cells summed).

        prediction
high_use      FALSE       TRUE        Sum
   FALSE 0.66492147 0.03664921 0.70157068
   TRUE  0.16753927 0.13089005 0.29842932
   Sum   0.83246073 0.16753927 1.00000000

Mean prediction error

The mean prediction error of the model is computed with a loss function that calculates the proportion of incorrect predictions.
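Such a loss function can be written as a short helper (a sketch in the style used on the course; it assumes a probability column obtained from the fitted model m with predict()):

```r
# Proportion of incorrect predictions: a prediction is wrong
# when the predicted probability falls on the wrong side of 0.5
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

alc$probability <- predict(m, type = "response")
loss_func(class = alc$high_use, prob = alc$probability)
```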

[1] 0.2041885

K-fold cross-validation

To test the predictive power of the chosen model, a method called cross-validation can be used. In cross-validation the data are divided into training and test sets: for example, the data are split into 5 equal parts, 4/5 are used as training data and 1/5 as test data. That is called 5-fold cross-validation. In the K-fold variant used here, K equals the number of observations (leave-one-out), so the training data include K−1 observations and the test data a single observation. The process is repeated so that every part of the data serves in turn as either training or test data.
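Cross-validation can be run with cv.glm() from the boot package (a sketch; loss_func is the error-proportion cost function described above and m the fitted glm object):

```r
library(boot)

# 10-fold cross-validation of the logistic model;
# K = nrow(alc) would give the leave-one-out variant
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
cv$delta[1]  # average prediction error over the folds
```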

In this case, K-fold cross-validation gives a slightly higher prediction error (0.21) than the mean prediction error.

[1] 0.2146597

10-fold cross-validation of my model

This model has a smaller prediction error (0.20) under 10-fold cross-validation than the model introduced in DataCamp (which had an error of about 0.26). This means that the variables chosen for my model predict high alcohol consumption better.

[1] 0.2041885

Cross-validation to compare the performance of different logistic regression models

The largest number of predictors is in my original model introduced earlier. The other models are reduced combinations of the original model, or the original model with previously unused variables added (N = 18). In 10-fold cross-validation, the smallest penalty loss (0.21) was achieved by my original model. Changing the variables increased or decreased the penalty loss depending on the significance of the chosen variables.

                    OR      2.5 %    97.5 %
(Intercept) 0.06277092 0.01648406 0.2225470
famrel      0.67449969 0.51060362 0.8869199
absences    1.08510512 1.04012840 1.1353940
sexM        2.75203774 1.66768032 4.6128191
goout       2.15461275 1.70345866 2.7639914
[1] 0.2094241
                   OR     2.5 %    97.5 %
(Intercept) 0.4877368 0.1702903 1.3635949
famrel      0.7475910 0.5811204 0.9595347
absences    1.0980228 1.0524564 1.1504519
sexM        2.7976292 1.7476499 4.5467291
[1] 0.2565445
                   OR     2.5 %   97.5 %
(Intercept) 0.7187304 0.2621851 1.935500
famrel      0.7857743 0.6158556 1.001830
absences    1.0905430 1.0452818 1.142647
[1] 0.2827225
                   OR     2.5 %    97.5 %
(Intercept) 1.1938851 0.4610715 3.0668110
famrel      0.7673146 0.6041200 0.9734032
[1] 0.3010471
                    OR       2.5 %     97.5 %
(Intercept) 0.01555815 0.005885392 0.03804621
absences    1.08782902 1.042458467 1.13933894
sexM        2.60835292 1.593132148 4.33151387
goout       2.07468346 1.650182481 2.64111050
[1] 0.2198953
                   OR      2.5 %    97.5 %
(Intercept) 0.0917904 0.02572428 0.3085338
famrel      0.7047893 0.53909482 0.9181526
absences    1.0772983 1.03352289 1.1268709
goout       2.1613089 1.71931650 2.7529251
[1] 0.2539267
                   OR     2.5 %    97.5 %
(Intercept) 0.4877368 0.1702903 1.3635949
famrel      0.7475910 0.5811204 0.9595347
absences    1.0980228 1.0524564 1.1504519
sexM        2.7976292 1.7476499 4.5467291
[1] 0.2591623
                   OR     2.5 %    97.5 %
(Intercept) 0.2775097 0.2012914 0.3760343
absences    1.0939748 1.0479459 1.1468072
[1] 0.2827225
                    OR       2.5 %     97.5 %
(Intercept) 0.01555815 0.005885392 0.03804621
absences    1.08782902 1.042458467 1.13933894
sexM        2.60835292 1.593132148 4.33151387
goout       2.07468346 1.650182481 2.64111050
[1] 0.2225131
                    OR      2.5 %    97.5 %
(Intercept) 0.09489664 0.02602013 0.3241397
famrel      0.66248876 0.50299888 0.8679153
sexM        2.54505957 1.56349803 4.1969640
goout       2.21913298 1.76086023 2.8372779
[1] 0.2251309
                   OR     2.5 %    97.5 %
(Intercept) 0.4877368 0.1702903 1.3635949
famrel      0.7475910 0.5811204 0.9595347
absences    1.0980228 1.0524564 1.1504519
sexM        2.7976292 1.7476499 4.5467291
[1] 0.2617801
                   OR     2.5 %    97.5 %
(Intercept) 0.2692308 0.1891905 0.3746607
sexM        2.3877551 1.5268584 3.7716682
[1] 0.2984293
                   OR     2.5 %    97.5 %
(Intercept) 0.8774021 0.3274356 2.3203334
famrel      0.7331474 0.5730317 0.9356179
sexM        2.5191408 1.6005358 4.0108387
[1] 0.2879581
                  OR     2.5 %    97.5 %
(Intercept) 0.159445 0.1012577 0.2427684
absences    1.101409 1.0549317 1.1548057
sexM        2.658116 1.6710354 4.2863129
[1] 0.2591623
                    OR       2.5 %     97.5 %
(Intercept) 0.02208168 0.008754964 0.05186666
sexM        2.39559680 1.484457552 3.91015147
goout       2.14025612 1.708110942 2.71701439
[1] 0.2277487
                   OR      2.5 %    97.5 %
(Intercept) 0.1314829 0.03816207 0.4283774
famrel      0.6930400 0.53161279 0.8999454
goout       2.2081701 1.76229526 2.8045361
[1] 0.2565445
                  OR      2.5 %     97.5 %
(Intercept) 0.026083 0.01076768 0.05922061
absences    1.080069 1.03581041 1.13071158
goout       2.087359 1.66817170 2.64318173
[1] 0.2356021
                    OR      2.5 %     97.5 %
(Intercept) 0.03504298 0.01505792 0.07687797
goout       2.13743726 1.71351154 2.69965289
[1] 0.2696335
                     OR        2.5 %    97.5 %
(Intercept) 0.006582546 8.420856e-05 0.4625185
famrel      0.642908900 4.804204e-01 0.8557823
absences    1.084677514 1.038897e+00 1.1349851
sexM        2.362741264 1.404117e+00 4.0253561
G3          0.978352444 8.988314e-01 1.0653837
freetime    1.143635232 8.645141e-01 1.5163385
age         1.129138861 8.986345e-01 1.4213128
failures    1.194170133 7.676563e-01 1.8609556
famsizeLE3  1.416227556 8.057840e-01 2.4756274
romanticyes 0.627696639 3.507903e-01 1.1001635
health      1.145482751 9.468044e-01 1.3925122
goout       2.042410603 1.593579e+00 2.6564950
[1] 0.2408377


Analysis of the Boston data

Overview of data

The Boston data is included in the R package MASS as a demonstration dataset.

The dataset contains social, environmental and economic information about the Greater Boston area. It includes the following variables:

  • crim = per capita crime rate by town
  • zn = proportion of residential land zoned for lots over 25,000 sq.ft.
  • indus = proportion of non-retail business acres per town
  • chas = Charles River dummy variable
  • nox = nitrogen oxides concentration (parts per 10 million)
  • rm = average number of rooms per dwelling
  • age = proportion of owner-occupied units built prior to 1940
  • dis = weighted mean of distances to five Boston employment centres
  • rad = index of accessibility to radial highways
  • tax = full-value property-tax rate per $10000
  • ptratio = pupil-teacher ratio by town
  • black = 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
  • lstat = lower status of the population (percent)
  • medv = median value of owner-occupied homes in $1000s

Structure and the dimensions of the data

The dataset has 14 variables and 506 observations, and all variables are numerical.

'data.frame':   506 obs. of  14 variables:
 $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
 $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
 $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
 $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
 $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
 $ rm     : num  6.58 6.42 7.18 7 7.15 ...
 $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
 $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
 $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
 $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
 $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
 $ black  : num  397 397 393 395 397 ...
 $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
 $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
[1] 506  14

Summary, graphical presentation of data and correlations

As seen in the pairs plot, most of the variables are not normally distributed: most are skewed and some are bimodal. Correlations between the variables are easier to view in the correlation plot, where on the upper-right side the biggest circles indicate the strongest correlations (blue = positive, red = negative). The corresponding numeric values are mirrored on the lower-left side.

      crim                zn             indus            chas        
 Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
 1st Qu.: 0.08204   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
 Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
 Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
 3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
 Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
      nox               rm             age              dis        
 Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
 1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
 Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
 Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
 3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
 Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
      rad              tax           ptratio          black       
 Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
 1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
 Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
 Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
 3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
 Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
     lstat            medv      
 Min.   : 1.73   Min.   : 5.00  
 1st Qu.: 6.95   1st Qu.:17.02  
 Median :11.36   Median :21.20  
 Mean   :12.65   Mean   :22.53  
 3rd Qu.:16.95   3rd Qu.:25.00  
 Max.   :37.97   Max.   :50.00  


Standardization

In standardization the mean of every variable is scaled to zero; that is, the variables are distributed around zero. This can be seen in the summary table (compare with the original summary above).

The crime rate variable has been changed to a categorical variable with 4 levels: low, med_low, med_high and high. Each class contains one quartile of the data (25%).

Train and test sets have been created by randomly dividing the original (standardized) data into two groups: 80% belongs to the train set and 20% to the test set.
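The three preparation steps can be sketched as follows (assuming the Boston data from the MASS package):

```r
library(MASS)

# 1) Standardize: subtract the mean and divide by the standard deviation
boston_scaled <- as.data.frame(scale(Boston))

# 2) Replace the continuous crime rate with quartile classes
bins <- quantile(boston_scaled$crim)
boston_scaled$crime <- cut(boston_scaled$crim, breaks = bins,
                           include.lowest = TRUE,
                           labels = c("low", "med_low", "med_high", "high"))
boston_scaled$crim <- NULL

# 3) Random 80/20 train/test split
n <- nrow(boston_scaled)
ind <- sample(n, size = round(n * 0.8))
train <- boston_scaled[ind, ]
test <- boston_scaled[-ind, ]
```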

      crim                 zn               indus        
 Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
 1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
 Median :-0.390280   Median :-0.48724   Median :-0.2109  
 Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
 3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
 Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
      chas              nox                rm               age         
 Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
 1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
 Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
 Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
 3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
 Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
      dis               rad               tax             ptratio       
 Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
 1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
 Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
 Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
 3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
 Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
     black             lstat              medv        
 Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
 1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
 Median : 0.3808   Median :-0.1811   Median :-0.1449  
 Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
 3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
 Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865  

Linear discriminant analysis (LDA)

In the linear discriminant analysis (LDA), only the train set (80% of the data) has been analysed. The target variable is the new categorical crime rate variable (low, med_low, med_high, high). All other variables of the dataset are used as predictor variables in the LDA model (see Overview of data).

The biplot below shows that the variable “rad” (index of accessibility to radial highways) has an extremely high influence on LD1 and LD2 compared to the other variables. In the biplot, the horizontal component of each vector describes its contribution to the LD1 dimension (x-axis) and the vertical component to the LD2 dimension (y-axis). The sign of the linear discriminant coefficient determines the direction of the vector, and the longer the vector, the bigger the influence. Most of the vectors contribute to both LD1 and LD2; because the biplot illustrates two dimensions, most variables point at angles between LD1 and LD2. For example, in the LDA table below the most significant variable of LD1, “rad”, has coefficients LD1 = 3.37 and LD2 = 0.86, which are directly readable as the coordinates of the arrow head. Similarly, “nox”, the variable with the second largest LD2 coefficient, has its arrow head at (0.34, −0.74). LD1 alone explains 95% of the between-group variance, LD2 about 4% and LD3 only 1%.

Call:
lda(crime ~ ., data = train)

Prior probabilities of groups:
      low   med_low  med_high      high 
0.2549505 0.2326733 0.2673267 0.2450495 

Group means:
                 zn      indus        chas        nox         rm
low       0.8392452 -0.8723237 -0.19588052 -0.8657096  0.4665312
med_low  -0.1186793 -0.2522468  0.02085925 -0.5624452 -0.1355060
med_high -0.3824294  0.2295502  0.27449041  0.4428308  0.0880766
high     -0.4872402  1.0171737 -0.11325431  1.0466618 -0.4406297
                age        dis        rad        tax     ptratio
low      -0.8554088  0.7923799 -0.7020020 -0.7465975 -0.35391801
med_low  -0.3628659  0.3818522 -0.5579158 -0.5353510 -0.09542846
med_high  0.4489395 -0.3962550 -0.4374128 -0.3128013 -0.30621602
high      0.7881564 -0.8350609  1.6375616  1.5136504  0.78011702
              black        lstat         medv
low       0.3899628 -0.770486623  0.520764333
med_low   0.3282817 -0.129693372 -0.009928882
med_high  0.1001329  0.005276176  0.146943312
high     -0.9321488  0.885300771 -0.693397694

Coefficients of linear discriminants:
                LD1           LD2         LD3
zn       0.05838581  0.6154175404 -0.95523756
indus    0.03955203 -0.2260053201  0.58575725
chas    -0.10304041 -0.1616718185  0.10360468
nox      0.34477455 -0.7446405741 -1.16655784
rm      -0.15064811 -0.0751262139 -0.17277269
age      0.22556064 -0.4004580720 -0.17938152
dis     -0.04020800 -0.2349529817  0.40161500
rad      3.36926211  0.8607197240  0.30296814
tax      0.05890313  0.0003002091 -0.03228629
ptratio  0.12766783  0.0209446738 -0.37307757
black   -0.13254739 -0.0136185972  0.09527682
lstat    0.20993669 -0.1426921021  0.48656977
medv     0.21385719 -0.3242006759 -0.10063538

Proportion of trace:
   LD1    LD2    LD3 
0.9523 0.0364 0.0114 

Predictive power of the model

In the test dataset the categorical crime variable has been removed. In the table below, the true values from the original test data are cross-tabulated against the values predicted for the test data (with crime removed). The total number of observations is 102 (about 20% of 506). The diagonal (from the top-left corner) contains the correct predictions (sum = 74) and all other cells the incorrect ones (sum = 28). The prediction error is 28/102 ≈ 0.27.
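The cross-tabulation can be produced roughly like this (a sketch; lda.fit stands for the model fitted on the train set above):

```r
# Save the correct classes, drop them from the test data, then predict
correct <- test$crime
test_x <- test[, names(test) != "crime"]
pred <- predict(lda.fit, newdata = test_x)
addmargins(table(correct = correct, predicted = pred$class))
```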

          predicted
correct    low med_low med_high high Sum
  low       17       6        1    0  24
  med_low    6      15       11    0  32
  med_high   0       2       14    2  18
  high       0       0        0   28  28
  Sum       23      23       26   30 102

Calculation of distances between the observations and optimal number of clusters

For this analysis the Euclidean distance matrix between the observations has been calculated; a summary of it is shown below. Using the K-means algorithm, the optimal number of clusters can be investigated: a sharp drop in TWSS (total within-cluster sum of squares) indicates the optimal number of clusters. In this case the optimal number of clusters is 2 or 3. In the first plot the data have been classified into two clusters and in the second plot into three.
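The distance matrix and the TWSS curve can be computed roughly like this (a sketch on the standardized data):

```r
# Euclidean distance matrix between the observations
dist_eu <- dist(boston_scaled)
summary(dist_eu)

# Total within-cluster sum of squares for 1..10 clusters
set.seed(123)
twss <- sapply(1:10, function(k) {
  kmeans(boston_scaled, centers = k)$tot.withinss
})
plot(1:10, twss, type = "b", xlab = "number of clusters", ylab = "TWSS")
```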

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
 0.1343  3.4625  4.8241  4.9111  6.1863 14.3970 

Bonus

Here LDA is calculated with the k-means clusters as target classes; all other variables in the Boston data are predictor variables. The LDA tables and biplots show the differences between the numbers of clusters. The variable “rad” is the most influential linear separator for the clusters in LD1, and the variable “zn” in LD2. At the moment knitting does not accept my code, so the code is shown below; I will try to fix this later.

library(MASS)  # for the Boston data and lda()

data("Boston")
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)

# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red",
                       tex = 0.75, choices = c(1, 2)) {
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0,
         x1 = myscale * heads[, choices[1]],
         y1 = myscale * heads[, choices[2]],
         col = color, length = arrow_heads)
  text(myscale * heads[, choices], labels = row.names(heads),
       cex = tex, col = color, pos = 3)
}

# k-means clusterings with 3..6 clusters
km3 <- kmeans(boston_scaled, centers = 3)
km4 <- kmeans(boston_scaled, centers = 4)
km5 <- kmeans(boston_scaled, centers = 5)
km6 <- kmeans(boston_scaled, centers = 6)

clu3 <- as.factor(km3$cluster)
clu4 <- as.factor(km4$cluster)
clu5 <- as.factor(km5$cluster)
clu6 <- as.factor(km6$cluster)

# LDA with the clusters as target classes
lda.fit3 <- lda(clu3 ~ ., data = boston_scaled)
lda.fit3
lda.fit4 <- lda(clu4 ~ ., data = boston_scaled)
lda.fit4
lda.fit5 <- lda(clu5 ~ ., data = boston_scaled)
lda.fit5
lda.fit6 <- lda(clu6 ~ ., data = boston_scaled)
lda.fit6

# target classes as numeric
classes <- as.numeric(clu3)
plot(lda.fit3, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit3, myscale = 1)

classes <- as.numeric(clu4)
plot(lda.fit4, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit4, myscale = 1)

classes <- as.numeric(clu5)
plot(lda.fit5, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit5, myscale = 1)

classes <- as.numeric(clu6)
plot(lda.fit6, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit6, myscale = 1)

Super-Bonus

Adjust the code: add color as an argument in the plot_ly() function and set it to the crime classes of the train set. Draw another 3D plot where the color is defined by the k-means clusters. How do the plots differ? Are there any similarities?

Train data classified by Crime (1 = low, 4 = high)

The data points are of course in the same positions; the grouping differs only slightly in the main group depending on whether colours are coded by crime class or by cluster. In the separate group, the high-crime class forms one well-isolated group when coloured by crime, whereas with the cluster colouring there are two clusters there. In other words, colouring by crime gathers the high-crime observations into a single group particularly well.

Train data classified by Clusters